115 research outputs found
More Dominantly Truthful Multi-Task Peer Prediction with a Finite Number of Tasks
In the setting where we ask participants multiple similar, possibly subjective,
multiple-choice questions (e.g. Do you like Bulbasaur? Y/N; do you like Squirtle?
Y/N), peer prediction aims to design mechanisms that encourage honest feedback
without verification. A series of works has successfully designed multi-task
peer prediction mechanisms in which reporting truthfully is better than any other
strategy (dominantly truthful), but these mechanisms require an infinite number of tasks.
A recent work proposes the first multi-task peer prediction mechanism, the
Determinant Mutual Information (DMI)-Mechanism, that is not only dominantly
truthful but also works for a finite number of tasks (practical). However, the
existence of other practical dominantly truthful multi-task peer prediction
mechanisms remained an open question. This work answers the above question
by providing: 1. a new family of information-monotone information measures,
volume mutual information (VMI), of which DMI is a special case; 2. a new family
of practical dominantly truthful multi-task peer prediction mechanisms,
VMI-Mechanisms. To illustrate the importance of VMI-Mechanisms, we also provide
a tractable effort incentive optimization goal. We show that DMI-Mechanism may
not be optimal, but we can construct a sequence of VMI-Mechanisms that are
approximately optimal. The main technical highlight in this paper is a novel
geometric information measure, Volume Mutual Information, that is based on a
simple idea: we can measure an object A's information amount by the number of
objects that are less informative than A. Different densities over the objects
lead to different information measures. This also gives Determinant Mutual
Information a simple geometric interpretation.
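The information monotonicity behind these measures can be checked numerically. The sketch below is my own illustration (not code from the paper): it treats the DMI of two signals as the absolute determinant of their joint distribution matrix, and verifies that garbling one signal through a row-stochastic matrix T scales DMI by exactly |det(T)| <= 1, so post-processing can never increase it.

```python
import numpy as np

def dmi(joint):
    """Determinant mutual information: |det| of the joint distribution matrix."""
    return abs(np.linalg.det(joint))

rng = np.random.default_rng(0)

# A joint distribution over two 3-valued signals (entries sum to 1).
U = rng.random((3, 3))
U /= U.sum()

# A row-stochastic "garbling" matrix T: T[y, y'] = P(Y' = y' | Y = y).
T = rng.random((3, 3))
T /= T.sum(axis=1, keepdims=True)

# Garbling Y replaces the joint matrix U by U @ T, so
# DMI shrinks by exactly |det(T)| <= 1 (data-processing inequality).
U_garbled = U @ T
assert abs(dmi(U_garbled) - dmi(U) * abs(np.linalg.det(T))) < 1e-12
assert dmi(U_garbled) <= dmi(U) + 1e-12
```

The multiplicativity of the determinant is what makes this data-processing argument a one-liner, and it is consistent with the geometric (volume-based) reading sketched in the abstract.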
Dominantly Truthful Multi-task Peer Prediction with a Constant Number of Tasks
In the setting where participants are asked multiple similar, possibly
subjective, multiple-choice questions (e.g. Do you like Panda Express? Y/N; do you
like Chick-fil-A? Y/N), a series of peer prediction mechanisms has been designed to
incentivize honest reports, and some of them achieve dominant truthfulness:
truth-telling is a dominant strategy and strictly dominates every other
"non-permutation strategy" under some mild conditions. However, a major issue
hinders the practical usage of those mechanisms: they require the participants
to perform an infinite number of tasks. When the participants perform a finite
number of tasks, these mechanisms only achieve approximate dominant
truthfulness. Whether a dominantly truthful multi-task peer prediction
mechanism that requires only a finite number of tasks exists remained an open
question, one that might have a negative answer, even with full prior knowledge.
This paper answers this open question by proposing a new mechanism,
Determinant based Mutual Information Mechanism (DMI-Mechanism), that is
dominantly truthful when the number of tasks is at least 2C and the number of
participants is at least 2, where C is the number of choices for each question
(C=2 for binary-choice questions). In addition to incentivizing honest reports,
DMI-Mechanism can also be transformed into an information evaluation rule that
identifies high-quality information without verification when there are at
least 3 participants. To the best of our knowledge, DMI-Mechanism is the first
dominantly truthful mechanism that works for a finite number of tasks, let
alone a small constant number of tasks.
Comment: To appear in SODA2
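As a concrete illustration, here is a minimal sketch (my own, not the authors' code) of the pairing payment at the heart of DMI-Mechanism: split the shared tasks into two disjoint halves, form a C x C report-count matrix for each half, and pay the product of the two determinants. Requiring at least 2C tasks ensures each half supplies a full C x C matrix, which is what makes the finite-task guarantee possible.

```python
import numpy as np

def dmi_payment(reports_i, reports_j, num_choices):
    """Sketch of the DMI-Mechanism payment for one pair of participants.

    reports_i, reports_j: lists of answers (0..num_choices-1) to shared tasks.
    Pays det(M1) * det(M2), where M1, M2 are report-count matrices built
    from two disjoint halves of the tasks.
    """
    n = len(reports_i)
    assert n >= 2 * num_choices, "DMI-Mechanism needs at least 2C tasks"
    half = n // 2

    def count_matrix(tasks):
        m = np.zeros((num_choices, num_choices))
        for t in tasks:
            m[reports_i[t], reports_j[t]] += 1
        return m

    m1 = count_matrix(range(half))
    m2 = count_matrix(range(half, n))
    return np.linalg.det(m1) * np.linalg.det(m2)

# Toy example: two participants answer 4 binary questions (C = 2).
pay_correlated = dmi_payment([0, 1, 0, 1], [0, 1, 0, 1], num_choices=2)
pay_unrelated = dmi_payment([0, 1, 0, 1], [0, 0, 1, 1], num_choices=2)
```

Perfectly correlated reports earn a positive payment, while reports carrying no common information drive the determinants, and hence the expected payment, to zero; any non-invertible manipulation of truthful reports can only shrink the expected determinant product.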
Multistable Perception, False Consensus, and Information Complements
This paper presents a distributed communication model to investigate
multistable perception, where a stimulus gives rise to multiple competing
perceptual interpretations. We formalize stable perception as consensus
achieved through components exchanging information. Our key finding is that
relationships between components influence monostable versus multistable
perceptions. When components contain substitute information about the
prediction target, stimuli display monostability. With complementary
information, multistability arises. We then analyze phenomena like order
effects and switching costs. Finally, we provide two additional perspectives.
An optimization perspective balances accuracy and communication costs, relating
stability to local optima. A prediction market perspective highlights the
strategic behaviors of neural coordination and provides insights into phenomena
like rivalry, inhibition, and mental disorders. The two perspectives
demonstrate how relationships among components influence perception costs, and
impact competition and coordination behaviors in neural dynamics.
Eliciting and Aggregating Information: An Information Theoretic Approach
Crowdsourcing---outsourcing tasks to a crowd of workers (e.g. Amazon Mechanical Turk, peer grading for massive open online courseware (MOOCs), scholarly peer review, and Yahoo answers)---is a fast, cheap, and effective method for performing simple tasks even at large scales. Two central problems in this area are:
Information Elicitation: how to design reward systems that incentivize high quality feedback from agents; and
Information Aggregation: how to aggregate the collected feedback to obtain a high quality forecast.
This thesis shows that the combination of game theory, information theory, and learning theory can bring a unified framework to both of the central problems in the crowdsourcing area. This thesis builds a natural connection between information elicitation and information aggregation, distills the essence of eliciting and aggregating information to the design of proper information measurements, and applies the information measurements to both central problems:
In the setting where information cannot be verified, this thesis proposes a simple yet powerful information-theoretic framework, the Mutual Information Paradigm (MIP), for information elicitation mechanisms. The framework pays every agent a measure of mutual information between her signal and a peer's signal. The mutual information measurement is required to have the key property that any "data processing" on the two random variables will decrease the mutual information between them. We identify such information measures that generalize Shannon mutual information. MIP overcomes the two main challenges in information elicitation without verification: (1) how to incentivize effort and avoid agents colluding to report random or identical responses; (2) how to motivate agents who believe they are in the minority to report truthfully.
To elicit expertise without verification, this thesis also defines a natural model for this setting based on the assumption that more sophisticated agents know the beliefs of less sophisticated agents, and extends MIP to a mechanism design framework, the Hierarchical Mutual Information Paradigm (HMIP), for this setting.
Aided by the information measures and the frameworks, this thesis (1) designs several novel information elicitation mechanisms (e.g. the disagreement mechanism, the f-mutual information mechanism, the multi-hierarchical mutual information mechanism, the common ground mechanism) in various settings such that honesty and effort are incentivized and expertise is identified; (2) addresses an important unsupervised learning problem, co-training, by reducing it to an information elicitation problem: forecast elicitation without verification.
PHD. Computer Science & Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145809/1/yuqkong_1.pd
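To make the MIP payment rule concrete, here is a small self-contained sketch (my own illustration, not code from the thesis) that pays an agent the empirical Shannon mutual information between her reports and a peer's reports. Note the actual mechanisms replace plain Shannon mutual information with generalized information measures and estimators that also deter uninformative coordination; this only shows the core quantity being paid.

```python
import math
from collections import Counter

def shannon_mi(xs, ys):
    """Empirical Shannon mutual information (in bits) between two report lists."""
    n = len(xs)
    joint = Counter(zip(xs, ys))   # joint counts of (agent report, peer report)
    px = Counter(xs)               # marginal counts for the agent
    py = Counter(ys)               # marginal counts for the peer
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        # p_xy / (p_x * p_y) simplifies to c * n / (px[x] * py[y]).
        mi += p_xy * math.log2(c * n / (px[x] * py[y]))
    return mi

# MIP sketch: the agent's payment is the mutual information between her
# reports and a peer's reports over the shared tasks.
pay_alice = shannon_mi([0, 1, 0, 1], [0, 1, 0, 1])  # identical informative reports
```

Because any "data processing" of a truthful report can only lose mutual information, reporting honestly (weakly) maximizes this payment against a truthful peer, which is the intuition MIP builds on.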
Equilibrium Selection in Information Elicitation without Verification via Information Monotonicity
In this paper, we propose a new mechanism, the Disagreement Mechanism, which elicits privately held, non-verifiable information from self-interested agents in the single-question (peer-prediction) setting.
To the best of our knowledge, our Disagreement Mechanism is the first strictly truthful mechanism in the single-question setting that is simultaneously:
- Detail-Free: does not need to know the common prior;
- Focal: truth-telling pays strictly more than any other symmetric equilibrium, excluding some unnatural permutation equilibria;
- Small group: the properties of the mechanism hold even for a small number of agents, even in the binary signal setting. Our mechanism only asks each agent for her signal as well as a forecast of the other agents' signals.
Additionally, we show that the focal result is both tight and robust, and we extend it to the case of asymmetric equilibria when the number of agents is sufficiently large.
Optimizing Bayesian Information Revelation Strategy in Prediction Markets: the Alice Bob Alice Case
Prediction markets provide a unique and compelling way to sell and aggregate information, yet a good understanding of optimal strategies for agents participating in such markets remains elusive. To model this complex setting, prior work proposes a three-stage game called the Alice Bob Alice (A-B-A) game: Alice participates in the market first, then Bob joins, and then Alice has a chance to participate again. While prior work has made progress in classifying the optimal strategy for certain interesting edge cases, it remained an open question how to calculate Alice's best strategy in the A-B-A game for a general information structure.
In this paper, we analyze the A-B-A game for a general information structure and (1) show a "revelation-principle"-style result: it is enough for Alice to use her private signal space as her announced signal space, that is, Alice cannot gain more by revealing her information more "finely"; (2) provide an FPTAS to compute the optimal information revelation strategy with additive error when Alice's information is a signal from a constant-sized set; (3) show that sometimes it is better for Alice to reveal partial information in the first stage even if Alice's information is a single binary bit.
- …